
Consequences of Misaligned AI




Neural Information Processing Systems

In human principal-agent problems, seemingly inconsequential changes to an agent's incentives often lead to surprising, counter-intuitive, and counter-productive behavior (21). Consequently, we must ask when this misalignment is costly: when is it counter-productive to optimize for an incomplete proxy?


Avoiding the Midas Touch: Consequences of Misaligned AI (Supplementary Material)

Neural Information Processing Systems

This document contains theorem proofs and algorithms for Avoiding the Midas Touch: Consequences of Misaligned AI. Some parts of the main text are repeated for completeness. In this section, we formalize the problem presented in the introduction in the context of objective function design for AI agents. If the human could simply express the entirety of their preferences to the robot, there would be no value misalignment. Unfortunately, there are many aspects of the world about which the human cares, and it is intractable to enumerate this complete set to the robot.
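A minimal formal reading of that setup, using the notation of the abstract below (our reconstruction, not a quotation from the paper): the world state carries attributes $s = (s_1, \ldots, s_L)$, the principal's utility is additive across attributes, $U(s) = \sum_{i=1}^{L} u_i(s_i)$, and the proxy given to the agent only scores $J < L$ of them, $\tilde{U}(s) = \sum_{i=1}^{J} u_i(s_i)$; the agent then maximizes $\tilde{U}$ over a resource-constrained feasible set.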



Consequences of Misaligned AI

Zhuang, Simon; Hadfield-Menell, Dylan

arXiv.org Artificial Intelligence

AI systems often rely on two key components: a specified goal or reward function and an optimization algorithm to compute the optimal behavior for that goal. This approach is intended to provide value for a principal: the user on whose behalf the agent acts. The objectives given to these agents are often only a partial specification of the principal's goals. We consider the cost of this incompleteness by analyzing a model of a principal and an agent in a resource-constrained world where the $L$ attributes of the state correspond to different sources of utility for the principal. We assume that the reward function given to the agent only has support on $J < L$ attributes. The contributions of our paper are as follows: 1) we propose a novel model of an incomplete principal-agent problem from artificial intelligence; 2) we provide necessary and sufficient conditions under which indefinitely optimizing for any incomplete proxy objective leads to arbitrarily low overall utility; and 3) we show how modifying the setup to allow reward functions that reference the full state, or allowing the principal to update the proxy objective over time, can lead to higher-utility solutions. The results in this paper argue that we should view the design of reward functions as an interactive and dynamic process, and they identify a theoretical scenario where some degree of interactivity is desirable.
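
The mechanism behind contribution 2 is easy to see in a small numerical sketch. The Python below is our own illustrative construction, not the authors' model or code: it assumes a diminishing-returns, unbounded-below per-attribute utility (log) and a hard shared budget, with $L = 4$ attributes of which the proxy scores only $J = 2$.

import numpy as np

# Toy instance (our assumptions): L attributes share a fixed budget B,
# per-attribute utility is log(s_i) (diminishing returns, unbounded below),
# and the proxy only scores the first J < L attributes.
L, J, B = 4, 2, 10.0

def true_utility(s):
    return np.sum(np.log(s))        # principal cares about all L attributes

def proxy_utility(s):
    return np.sum(np.log(s[:J]))    # agent's reward sees only the first J

# Optimizing the proxy under the budget starves the unreferenced
# attributes: every unit left to them is a unit denied to the proxy.
# We sweep how much budget remains for the unreferenced attributes.
for leftover in (1.0, 0.1, 0.01, 0.001):
    s = np.empty(L)
    s[:J] = (B - leftover) / J      # proxy attributes split the budget
    s[J:] = leftover / (L - J)      # unreferenced attributes get the scraps
    print(f"leftover={leftover:6.3f}  proxy={proxy_utility(s):6.3f}  "
          f"true={true_utility(s):8.3f}")

As leftover shrinks, the proxy value creeps upward while true utility diverges toward $-\infty$: when attributes compete for resources and utility is unbounded below in the unreferenced attributes, indefinitely optimizing the incomplete proxy drives overall utility arbitrarily low, which is the qualitative shape of the paper's necessary-and-sufficient condition.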